
Artificial intelligence is a technological evolution that's reshaping the future of law, but its rapid ascent brings both benefits and potential pitfalls for the unwary. In this episode, former Vinson & Elkins executive Tim Armstrong visits with Todd Smith and Jody Sanders about the evolving landscape of AI and its potential impact on the legal profession. Tim covers ways AI can change day-to-day tasks, like managing documents and conducting research and discovery. He also shares his personal experience from several decades of watching the shift from traditional practice to the digital forefront, and he provides practical insights on how attorneys can engage, learn, and adapt to stay ahead of the curve as AI continues to change the way we practice.

Our guest is Tim Armstrong. Tim is here to talk with us about everyone's favorite topic, AI or Artificial Intelligence, and how it applies to the practice of law. Welcome to the show, Tim.

Thank you very much, Todd. I am glad to join you guys.

Why don't we start with you telling our audience a little bit about yourself, who you are, your background, what leads you to be interested in AI, and qualified to talk about it and educate us on it?

I joined the legal industry many years ago. I had an IT background as well as a financial background, an undergrad in Accounting, and then an MBA in Finance. I joined Vinson & Elkins in their Dallas office at the time in more of a financial management role. I then migrated from the finance side into IT. I did an incredible amount of IT work. Ultimately, I took the role of Chief Information Officer. I got a call to go back to the finance side, moved into the CFO role at V&E, and in 2009, I moved over into the chief operating role.

I've stayed very close to technology over the years. AI is one of those new creative things that takes me back to when we started talking about knowledge management and how we organized around it. AI has a lot of potential to help us automate and do what we started trying to do many years ago.

You bring up knowledge management. That was a time when law firms were undergoing a pretty serious transformation. They kept track of what they had already done and tried to leverage that going forward. That does seem to be a point where there's a natural fit with AI.

That's true. One of the debates that we used to have a lot on the KM side was with a number of our lawyers. We had a technology oversight committee and developed a knowledge management committee at the time. There was this desire to do this white bar universal search that we would point at all of our data. One of the things people never wanted to appreciate or acknowledge was that all of the data wasn't that good. There was still a need to go through, clean it up, and organize it.

People were doing these white bar searches. This was in the days after Google, and people were very used to searching. What they classified as a white bar search was going to a single line and typing. A lot of times, the results that came back included a lot of information that you didn't necessarily want or that wasn't relevant. We were working on ways to try to rank and add quality to that data. It was very difficult. It is like any of these technologies we've done over the years. Lawyers are really interested until we turn the tables back and say, "We need you to help us build it, train it, or do whatever," and then they lose interest in a hurry.

Before we launch into some of the details, you played all those different roles at V&E. I can't say that I have come across anybody who has worn quite so many hats in a very large law firm. The fact that you not only were COO but you were the Chief Information Officer and the Chief Financial Officer at various points in time, you've got some marketable skills there.

It kept me busy, for sure.

I bet.

Are you still at V&E?

I'm not. I left V&E in the summer of 2022. We had a large management turnover coming out of 2021. It was a decision point for me on what I wanted to do. After a little over 29 years, I decided it was time for me to branch out and do something else. I've been doing work with a number of other firms on a consulting basis. I am doing some advisory work on the law firm M&A projects that are out there, which is a nice, interesting term. I get to work with some acquirer firms that are looking at acquirees. I get pulled in to do a lot of analysis and review of those firms from an operating and financial capacity. It allows me to take advantage of the skillset that I've developed over the years.

To set the stage here, let's talk about what generative AI is. You could hardly be breathing and reading this episode and not have heard the term, especially if you're in the legal industry. It's so pervasive in all the things that we read in the industry, news, articles, and so forth. It would be beneficial to understand where generative AI fits into overall technology issues and generally what it is. Can you give us a quick explanation of it?

Generative AI gained a lot of traction coming out of 2022 when OpenAI released their ChatGPT model. GPT stands for Generative Pre-Trained Transformer. There have been a number of iterations of artificial intelligence over the years, from classic AI to machine learning. I'll admit, I'll digress a little bit. I've had a great interest in machine learning over the years.

To focus on the financial and operational side of the firm, firms have been producing an incredible number of data points and metrics over the years, to the point that it's very hard to consume them all. In fact, we produced so much information that it was hard to organize and project to the leadership. I started talking to a number of my folks about developing these intelligent triggers around a lot of our KPIs so that I didn't have to come in and consume an overwhelming number of data points. I could get information presented and filtered to me in a way that I would know a little bit about what to look at through these so-called intelligent triggers.

Generative AI is a step beyond that. It goes into deep learning, which has been around for years. Probably a lot of the deep learning started in the 2015 or 2016 timeframe. We started hearing more about it with Google doing a lot of work on the AI side, and OpenAI was out there doing some things as well. It was really the concept of deep learning. That's where I got interested, because I wanted to understand it.

I started hearing the term deep learning. I wanted to understand what they were doing, how they were training these models, and what they were ultimately using to produce the information that they were. That's when I started to learn more about generative AI. I started to learn about the building of the neural networks and things like that that they were using to train the data. Even some of the people who build these complex neural networks, I've read, still don't completely understand how they work at times. All the information the models consume is turned into math, and you use the math to develop the generative AI piece. Once these large language models are built, the chat piece is what's built to engage with the model.
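
To make Tim's "it all turns into math" point a little more concrete, here is a minimal sketch of the kind of arithmetic underneath these models. The token vectors, layer sizes, and weights are all illustrative toys, not anything from a real LLM.

```python
# Minimal sketch of "text turns into math": a single neural-network
# layer. Real large language models stack thousands of far larger
# layers, but the arithmetic is the same in spirit. Every name and
# number here is illustrative, not any vendor's actual model.
import numpy as np

rng = np.random.default_rng(0)

# Pretend embedding: each token of a prompt becomes a vector of numbers.
tokens = ["breach", "of", "contract"]
embeddings = {t: rng.normal(size=4) for t in tokens}  # 4-dim toy vectors

x = np.stack([embeddings[t] for t in tokens])  # shape (3 tokens, 4 dims)

# One dense layer: multiply by learned weights, add bias, apply nonlinearity.
W = rng.normal(size=(4, 4))
b = np.zeros(4)
hidden = np.tanh(x @ W + b)

print(hidden.shape)  # (3, 4): text in, matrices of numbers out
```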

I want to clarify a little bit of terminology because I'm not an AI guy. If you don't mind, take a few minutes to step us through what some of these terms are. One of the ones you mentioned was deep learning, if you can explain that as well as neural networks. I know those are dense topics, but give us a high-level view for those of us who are appellate lawyers.

If you think about these things, AI is the overarching piece. Machine learning is a part of that. Deep learning is another piece of it. Deep learning is what you see associated with a lot of people who are talking about the large language models and how those are built. Large language models take an enormous number of compute cycles to build.

What I have said a lot, and what Todd would have heard me say during my presentation, is that a lot of law firms are not really doing the deep learning side. That's where the Harveys and others come into play. If you've heard about what Harvey is doing on the legal side, they're using what I would consider the distributed framework out there from Microsoft and others to provide the compute cycles to build these large language models that can be used for the chat exercises that are going on now.

You alluded to me having heard you speak before at a conference here in Austin. That's where I came across you. You've diagrammed out AI, machine learning, and deep learning into a Venn diagram illustration with AI being the big circle. It is a technique that allows machines to mimic human behavior, and machine learning is a subset of that. Deep learning is a subset of that where it starts to mimic human behavior, or at least it fools us well enough that we can't always tell we're talking to a chatbot. For lay people like me and Jody, sometimes it is rather difficult to get your mind around this. I appreciate you taking the time to step back a little bit from it. We can follow the conversation, but we want to make sure that our audience has the terminology for their benefit, too.

Picking up from there, let's go back to gen AI and what it does. I don't want to put my own words out there. We're going to talk about the practicalities of it and probably the ultimate question that everybody wants to know, at least for everybody reading this, "Is it going to take our jobs?" Let's take a step back and go through gen AI further in terms of how we're seeing it, what it's capable of doing, and how we're seeing it in our everyday lives.

The generative AI piece is what most folks are going to be interested in. That's what most lawyers are going to be interested in. There are some of us on the back-office side who will be interested in other variations of the AI that are out there and how we use them. Generative AI is what is really being brought forward and what people are talking about in tools that are being made available. There are some tools out there that I'll jump into a little bit because it is relevant to talk about the whole generative AI piece and the prompt level, which is what people will hear a lot about. With generative AI, you've got these large language models. A large language model is trained by reading an incredible amount of data.

In fact, that's where a lot of the initial lawsuits were established, because OpenAI and some of the other LLM builders were going out, pulling information from the general internet or whatever, and feeding these large language models all this information. People started saying, "That's got my data in it. It's got books that I've written that have been compiled." That was the genesis of some of the lawsuits. Ultimately, with generative AI, it's very easy to go out to OpenAI and create an account. There are a number of these out there where you are presented with this blank prompt and you can start asking it questions and doing things to get information out. That prompt framework is really what is going to be key.

There have been some firms that have gone out and offered prompt engineering training to their summer associate class. Some of the tools that are coming from NetDocuments automate the prompt element and help craft the prompts that go into gen AI to extract information. The prompts will do all kinds of things. There are tons of specific elements that you can put and ask in there that will craft a unique response back from whatever large language model you happen to be engaging and interfacing with. That is really where a lot of the work is being done, particularly on the legal side, whatever the tools.

It is about getting the right prompt to ask the right question of generative AI so that it can then generate from the LLM information that is relevant to what you put against it. It's one of those things: if you ask a bad or incomplete question, you're not going to get great information out of it. A big, important piece is getting people trained to use these tools.
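
As a rough illustration of the point about question quality, here is a small sketch in Python. The `ask_llm` function is a hypothetical stand-in for whatever model interface a firm's approved tool exposes, and both prompts are invented examples, not anyone's recommended wording.

```python
# Sketch of why prompt quality matters. `ask_llm` is a hypothetical
# stand-in for a firm's approved chat interface; it is not a real
# library call.
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your firm's approved model")

# A vague prompt invites a vague (or invented) answer.
vague = "Tell me about limitation periods."

# A crafted prompt pins down role, jurisdiction, scope, and how to
# handle uncertainty, which is what prompt engineering training
# tries to teach.
crafted = (
    "You are assisting a Texas litigator. List the limitations "
    "periods for breach-of-written-contract claims in Texas, citing "
    "the governing statute for each, and flag anything you are "
    "uncertain about rather than guessing."
)
```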

It sounds a lot like the analog to electronic research and training people how to use Westlaw and Lexis and getting the right queries there.

It is exactly that. What is going to complicate it more, and this was a point that I made in the discussion that Todd referenced, is that everyone has an AI tool. They're not even calling it gen AI or whatever else. Whatever vendor you're talking to has some element of AI that's built into their solution. One of the things that I told the group in speaking, and the people that I've talked to, is that you can't stand up and say, "Here's the AI solution that your firm ought to engage." There is a string of AI solutions coming in a lot of the tools that you already use. They're all going to be priced separately, and there's going to be a different cost element to each.

As for the vendors, I don't want to talk too negatively about them, but they look at this as a way to generate new revenue streams. It's one of the things that I talk to firms about: understanding what your agreements are like, whether you're up for renewal, and how you evaluate product offerings and these tools before you bring them in. They will add expense to whatever budget, whether it's a practice budget, a technology budget, or whatever else. It will cost money.

Going back to the large language models for a minute, you mentioned ChatGPT. That's the one everyone is most familiar with. There are others. How do these vendors determine what makes up the model that they're going to use? Do they create their own or are they using some universal model?

This is something else, to back up a little bit, that I caution the firms about. Whatever you talk to your vendor about, you need to understand what they're using. What LLM are they pointing to? There are a number of different LLMs out there, and they're changing all the time. In fact, there's a concept that you'll hear discussed more called new frontier models. These are new models that are coming out from the OpenAIs, Gemini from Google, and others. There are a number of these LLMs out there that are growing and changing.

We'll digress a little bit more. I apologize. You see these firms that are coming out and saying, "We have our own ChatGPT," but when you really peel back the layers of it, they're building it on something from Microsoft on Microsoft Azure. The tech folks will appreciate it. It is a foundational element out there. Microsoft has a relationship with OpenAI, so OpenAI is part of what I would consider their intelligence genome, for lack of a better way to describe it.

When these firms talk about using the OpenAI model while not exposing their data, they're really using it inside that Azure framework. Anytime I see any solution, I start to tear it down to understand what large language model it's using and what has been done to it. Has any specific training been done with it?

We're seeing the concept, particularly around contract lifecycle management. We were talking about the use of frozen models, which means they're taking one of these large language models, and that is what you're connecting to and using, but they're not allowing that model to change. It's not being updated. It's not learning from what you're asking it to do, which addresses some of the privacy concerns: you don't have information spilling back into the model.

The models are really updated from an engineering standpoint. It's very controlled. You know what's going on with it. It is understanding what that large language model is, what its basis is, whether it is one of the OpenAI models or one from another large vendor, and what's been done to that model. There are so many issues, or potential issues, with the models. I've read about certification programs for models that would guarantee the viability and integrity of a model.
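
For the technically curious, here is a minimal sketch of what "freezing" a model can mean at the engineering level, using PyTorch. The toy two-layer network stands in for a large language model; the vendors' actual mechanisms are certainly more involved.

```python
# Minimal sketch of a "frozen" model, assuming PyTorch. Freezing
# means the weights can no longer change, so nothing a user submits
# can leak back into the model through training.
import torch
import torch.nn as nn

# Toy stand-in for a large language model.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 8))

# Freeze: disable gradient updates and switch to inference mode.
for param in model.parameters():
    param.requires_grad = False
model.eval()

# The model can still answer (run forward passes)...
with torch.no_grad():
    output = model(torch.randn(1, 8))

# ...but any weight update would now be a deliberate, controlled
# engineering release, not a side effect of day-to-day use.
print(output.shape)  # (1, 8)
```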

At the end of the day, it is reading and converting a lot of what I call unstructured data out there and turning it into math. It can be influenced, and that is part of the concern. I won't get into too much of that yet. That's probably not part of the conversation. The bottom line is that understanding these tools and those models, where they're coming from, what their genesis is, and how they've been impacted is incredibly important for anybody to understand. It is very important for law firms to understand because of a lot of the guarantees and assurances that you need to make about the data, what information is coming in, or how you're using it.

This is another vendor that you're engaging at some level, whether you're a sole practitioner or an AmLaw 100 firm. At some level, you've got to have a degree of comfort with the vendor that you're using. You've mentioned OpenAI, which is the company behind ChatGPT. They seem to have what I don't want to call market share, but maybe that's the right terminology. Everyone is more familiar with them than with many others that are out there. This may be a difficult question to answer, but where would a firm even start if it was looking to use a ChatGPT-type functionality generally in their law firm?

This is really where the challenge sits. It's one of the things that we've talked about and debated a lot. With the larger firms, a lot of this ultimately sits with the general counsel. While it's different, there are a lot of similar discussions to the ones we had around the whole cyber issue about the way the firm is going to engage and what it's going to do. It would be very wrong for firms not to have an overarching philosophy or idea about how they want to use it. The thing about it is that it's not going to be IT showing up and saying, "We ought to use AI." It's going to be business development. It is going to be practice groups that are looking at some of the research tools. That's where a lot of the different tools come into play. There will need to be an assessment and an understanding of what we're willing to take on.

A lot of what I've been reading is that the AI stuff is changing incredibly quickly. You can read almost every day about what is going on. There's some value in it. Firms need to start developing an idea about where they're going to engage and where they're going to use it. A long time ago, we were looking at document automation and all. I did a lot of that work at the practice group level. We were looking at very specialized research tools at the practice group level.

AI will begin to influence a lot of those tools that are already out there and a lot of the automation tools that are out there. It may be that the practice group is driving that. You will have some of the other groups coming forward and saying, "I can use AI on the BD side." There will be a responsibility to understand the tools and how they're going to be employed. These things are going to take a lot of resources. Trust me. They're not something where you buy the tool, install it, and it's ready to go and work. There will need to be an incredible amount of understanding and training. It will be very easy for groups to become overwhelmed by the number of tools.

I read the legal tech news and stuff every day. Everyone has an AI tool that they're coming out with. It's part of their solution. To back up to the question, they should be talking about this stuff already and building the framework on how they're going to understand, evaluate, and, ultimately, integrate. They have to have resources. I thought it was nice that at the discussion in Austin, you had a lot of chief operating officers with their human resource counterparts, who are over a lot of the training and all. These tools will be great, but unless you get the user interfaces, what are being termed these universal interfaces, built correctly and people trained to use them, these tools are going to be very frustrating.

This gets back to your point about being replaced. This is not about replacing lawyers at all. This is about providing assistance and efficiency in your ability to go and get information. Whether it's on the research side, consuming internal metrics, or finding out things about clients, these tools will help you do things a lot faster if they're engineered and implemented the right way. That is going to be part of the philosophy and part of what the firm has to understand.

On the small firm side, this gives some of the small firms the ability to engage and do things that they haven't been able to do before because they will be able to acquire some of these efficiency-providing technologies. They can buy it piecemeal. They can buy it out of the Cloud and do things rather than worry about larger solutions. I don't think this is just a big firm issue. Small firms need to be thinking about it, too. They need to approach it a little bit differently and have a different set of people that they're involved with.

Have you seen much on the client side either pressure from clients to utilize these tools to be efficient or push back from clients that are nervous about it?

I have read a little bit of both. This is one of the reasons that general counsel, or anybody in that role, need to be involved: somebody has to read the outside counsel guidelines that are coming in. This gets back to the cyber issue. Clients will say, "You're not allowed to use AI at all." What does that mean? What kind of AI? There needs to be more detail in that. Saying "we can't use generative AI to produce any work for them" is very different from saying, "We're going to be using AI to do research," or, "We're going to be using AI on the metrics side to understand more about the profitability of matters," or something like that. Clients need to stay away from those blanket statements.

You'll have other companies coming to firms saying, "You need to be using AI to gain efficiencies." That's a general statement. What does that mean? This is where the document automation piece will come into play. The firms will be able to articulate how they will use it to gain these efficiencies. Most companies already say to us, "We're not going to pay for legal research and all that other stuff." That is an advantage from an operational standpoint.

From the law firm side, there are a lot of things that they can automate and gain efficiencies from that will help them. This has been the internal argument. It goes back to my days as CIO. I would go out and look for a piece of technology to bring in. It was very hard to go to one of our department heads or whoever else and say, "I've got a technology that will allow you to bill fewer hours." That argument was a tough one to have. My debate with a lot of those partners at the time was, "If we're not prepared to do it, other firms will be. We at least need to be prepared to engage it."

Firms will be able to do some things quicker. Talking about some of the tools that streamline prompt automation, if you're a NetDocuments shop, some of what they do will have prompt automation elements in their tool that will allow you to do document assembly faster. You answer a series of questions and it builds the prompts for you so you don't have to. Those kinds of tools will be very good.
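
Here is a minimal sketch of the "answer a series of questions and it builds the prompt" idea. The questionnaire fields and the template are invented for illustration, not taken from NetDocuments or any other vendor's product.

```python
# Sketch of prompt automation for document assembly: a user answers
# a few questions, and the tool assembles a well-formed prompt.
# Field names and template wording are illustrative only.
DEMAND_LETTER_TEMPLATE = (
    "Draft a demand letter on behalf of {client}, addressed to "
    "{counterparty}, concerning {dispute}. Use a formal tone, cite "
    "no authority you cannot verify, and keep it under {pages} page(s)."
)

def build_prompt(answers: dict) -> str:
    """Turn a user's questionnaire answers into a finished prompt."""
    return DEMAND_LETTER_TEMPLATE.format(**answers)

prompt = build_prompt({
    "client": "Acme Supply Co.",
    "counterparty": "Borealis Logistics LLC",
    "dispute": "unpaid invoices under a master services agreement",
    "pages": 2,
})
print(prompt)
```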

Where you're seeing flat fee or fixed fee work, you're having to gain some efficiency there, and it will allow you to do it. As I'm talking about this, it raises a lot of questions about how you're tracking, looking at, and monitoring this stuff already. The bottom line is that's where the clients will become better aware of this stuff and where it fits. They'll start asking firms, "What are you doing with it? Why aren't you engaged with it, particularly on the efficiency side of the equation?"

You could get into a whole debate about what this is going to do to the billable hour model. Maybe that's part of this conversation. Most of what you said in response to Jody's question, Tim, was directly relevant to that. As you point out, these models are supposed to make you more efficient. If you're more efficient, then you can bill fewer hours. If you bill fewer hours, then your bills ought to be lower and the client spending ought to be lower for the same amount of work. It's fraught with difficulties for the legal industry generally as a whole because we're so heavily built on the billable hour model.

I've been having the billable hour debate with folks going back to the '90s. Everybody goes after the billable hour. It has changed a lot. Technology has made people more efficient in being able to get more done. That is ultimately where generative AI, or some form of it, will help. For everybody to understand, it's not going to change tomorrow. It is one of those evolutionary things that will come into play. It's important to understand it and understand the solutions and where they fit. People will have to adapt to it. Over time, firms that are willing to adapt will be better positioned to compete than firms that ignore it. It's like any technology.

I remember when I first brought Blackberries into the firm back in the late '90s. Dennis Van Metre spoke about this on one of his podcasts. It was the easiest project that we had ever done. Dennis is a very good friend. He was the director of the real technical side of the IT organization. He eventually moved into the CIO role as I vacated that role and moved on to other things. He and I were doing a lot of tests to try to make email mobile. We were looking at some of the SkyTel pagers and everything else. Blackberry happened to pop up at an event in Dallas where we were like, "This would be a great tool for us to have." I bought 40 devices directly from Blackberry at the time. We put 20 in Houston and 20 in a newly opened New York office.

Within six months, we had 1,000 devices deployed. It was the easiest project we'd ever done. That was one of those things that made us connected. I bring that up because it was an inflection point. It was a shift in the responsiveness attitude. There was no more, "Why didn't I see that email?" You saw the email. People knew you saw it. You almost had to respond to it. I remember when we brought them into the firm. Immediately, I went on a vacation with my family. I was with my wife and our two young sons at the time. We were sitting by the swimming pool. I had a Blackberry and my wife was like, "What are you doing?" I was as giddy as I could be. I was like, "I'm doing email." Little did I know that that would be the anchor that would weigh me down over the years.

That's so true.

AI will be one of those kinds of technologies. In three years, we'll be having a completely different conversation about it. It will be an integrated part of what we do.

I believe it.

You'll accept it and use it whether you like it or not. If you're getting the Office 365 updates on any of your home stuff, there are new AI elements in there where it is starting to try to finish sentences as you're typing. In your email or in a Word document, it is saying, "I read this email and you have a task. Do you want to move it to your task list?" Those things are coming already. That was one of the things that I warned firms about. That is coming out more in the general updates.

These things are going to become a little bit of a nuisance. People are going to see them and be going, "What is this? How do I need to respond to this?" Some of it, firms will need to dial back because people won't want it. It will be confusing. It has already created some issues for firms as far as information management.

We talked about ChatGPT generally. It's what people think of in terms of AI and the use case for it, typing in questions, and so forth. You've mentioned that some of what we think of as traditional legal vendors like Thomson Reuters, Lexis, and so forth are rolling out AI features in their products. It does seem like a natural fit to me to allow those folks who have a lot invested and are spending a lot of money on their AI development to test the water in terms of how that capability could benefit the provision of legal services.

I know, for example, that Thomson Reuters not too long ago acquired Casetext. I know that they're getting ready to roll out their Casetext-type functionality, which includes what we would think of as real AI-type features, like summarizing and outlining, within that platform. It seems to me that's a pretty low-risk, low-barrier entry point for law firms: sign on to something like that, see the real benefit of it, and maybe it would spread from there.

That is right. Looking at these tools and what these organizations are trying to do, there are a lot of things being rolled out. They will talk about them being in test form or somewhat experimental or whatever else. It's important to understand that state. Thomson Reuters is one. I talked about the NetDocuments pieces. There are some very valuable tools out there. For firms, it will be having conversations internally about, "Where do we want to go with this? Where do we want to focus our investment?"

What I was telling the chief operating officers back in September 2023 was, "If you guys are all in the middle of doing your plan development, you probably need to earmark some dollars for some of these tools. These tools are not going to come without some added expense." They're going to fall into the project mix. You're going to have your project managers dedicated to dealing with the training resources to get people using it. There are low-risk elements that firms can bring in and start leveraging. They should do that. That will be part of the education and part of cutting your teeth, so to speak, on this.

Doing some very targeted applications with this and then broadening from there, there are some nice, easy, safe uses out there. The research side will offer a lot. People will have to understand the downside: these tools are not perfect. Some of these vendors are doing the fine-tuning necessary to limit the hallucinations that you'll hear about in some of the large language models and narrow them.

Lawyers Implementing AI: Deep learning is really what you see associated with a lot of people who are talking about the large language models and how those are built.

This gets back to what is coming on the regulatory piece and what a lot of the associations are commenting on. You guys as lawyers still have the responsibility to ultimately look at it and make a determination of whether what you're seeing is viable or not. That is really going to be the important piece. A lot of this stuff early on is going to be in the assistant phase. It's going to assist and help you. It's not going to replace anything that you do. It will hopefully help you gather information and get things sorted for a client faster. You still will have the responsibility to be comfortable with what you're ultimately producing.

Lawyers Implementing AI: Lawyers still have the responsibility to ultimately look at it and make a determination of whether what they're seeing is viable or not.

It's like hiring a legal assistant. You need to make sure that you're hiring a competent legal assistant with a good background. This is very much the same on a technical side instead of an HR side.

That is a great analogy. Some of these tools are even pitched as legal assistant-type technology. If you look at what some firms are doing, they're looking at an AI document assistant, but it really is focused on what that paralegal or project person would do. That is where maybe some of the work will be displaced at times. It's still up to you to make sure that it's accurate and it's what you want to see.

It really seems like it touches on every aspect of the legal industry. Starting back, you mentioned BD or Business Development. I could see use cases for it all throughout the marketing functions and BD functions of a law firm. We've talked a lot about how it would affect the delivery of legal services being a major factor. There are other functions, too. There's administration, operations, security, and privacy. It's this weird double-edged sword. On the one hand, you're very concerned about privacy and security, but on the other, there are probably AI tools that will make it better and easier for you to monitor your security on your computer systems and so forth.

I'm going to make one comment about the security side and then I will talk about the business development side. This goes back to 2016 or 2017 and things I was thinking about from an AI perspective. I happened to be on the cyber side. We were employing all these tools. We were doing all this monitoring and creating all these logs, which no human would have the time or ability to go back and review. You can use AI to do that. That's where the Palo Altos and whatnot are using those tools to interpret that data closer to real time. Some would argue in real time, because it's looking for patterns in that data. It's looking for adverse patterns, things that you should highlight. There is good use for that.
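
To illustrate that pattern-spotting idea, here is a small sketch of unsupervised anomaly detection over log-like data, using scikit-learn. The features and numbers are invented; real security platforms are far more sophisticated than this.

```python
# Sketch of the cyber use case Tim describes: an unsupervised model
# flags unusual entries in more log data than a human could review.
# The features (hour of day, bytes transferred) are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Toy features per login event: hour of day, bytes transferred.
normal = np.column_stack([
    rng.normal(10, 2, 500),      # mostly business hours
    rng.normal(2e6, 5e5, 500),   # typical transfer sizes
])
odd = np.array([[3.0, 9e7]])     # 3 a.m., enormous transfer

events = np.vstack([normal, odd])

detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(events)  # -1 marks outliers

print(np.where(labels == -1)[0])  # event indexes worth a human look
```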

Firms have probably already been using AI to some degree without it really being broadcast or people realizing it. On the BD side, it's a little bit of a double-edged sword where you're producing content or whatever. Maybe you're doing pitches or things like that. I've encouraged people to use it like they're writing the objective in their resume. There are fairly easy ways to engineer a prompt to produce a very coherent, intelligent-sounding objective for a resume. BD people can use that.

The one thing that folks will have to be careful about is that if they're using any of the mainstream tools out there, anything you pose, and what it generates, is recaptured. That's where a lot of firms are saying, "We want to be careful about using it," or they're restricting use or whatever else. It is because they don't want information that may be in one of the prompts to end up in one of the public databases. That is one of the concerns. From a BD standpoint, there are great uses there. For anybody producing content, where it's not a legal opinion or anything like that but something that somebody's going to read, it's a great tool to help automate that, enhance the content, and make it sound really good.

We see some of the benefits. You described those really well. Efficiency, capitalizing, hopefully, on the firms' knowledge bases. It's easy to see the upside to AI generally and the use of it. We've talked a little bit about some of the downsides and some of the risks. We've talked some about confidentiality. The model is only as good as what you put in. Garbage in, garbage out, and that sort of thing.

You mentioned the concept of hallucinations which we haven't explained. I do want to talk about that in the context of the Mata versus Avianca case that a lot of folks who have tuned into our show and followed the legal news will have heard about. I know you know about it. Talk to us a little bit about Mata and hallucinations generally. Explain the risks of overreliance on ChatGPT or other models.

It is a real risk. I'll move sideways a little bit and talk about another lawyer, a litigator I was having a conversation with. He had a case he was working on and thought it would be interesting to go out and pose a question to OpenAI, and it started citing all these cases and all this stuff. He immediately looked at it and thought, "I missed all this relevant stuff." What he figured out when he started looking into it was that it was making stuff up. It was making cases and references up. None of this stuff really existed. That is the issue with hallucination.

I hate to say it this way. There are probably people that would argue with me about it. These models are trained on a lot of stuff that is rendered into math. There have been questions about, "These systems learn foreign languages. They're so smart." They're looking at stuff that is fed to them in a foreign language. They're seeing the patterns. At times, they will tend to regurgitate stuff or try to fill a void in speaking. They'll throw out information that may be correct or may not be.

There is another thing, too, about some of the LLMs being poisoned, where people can go in and specifically engineer certain facts into an LLM. That's where the concern around misinformation comes in. You've got an LLM that may be producing correct information, so then all of a sudden, when it throws out a piece of information, you assume it's correct because everything else was, but it was an engineered point that was wrong. That is the downside to it.

There are a lot of firms that are like, "I'm going to use my materials. I want to build my own LLM." It gets back to the knowledge management discussion we were having many years ago. What are you going to focus it on? You need a good slate of information that you're comfortable training on. You don't want to train it on everything.

Even on the appellate side, and I've spent enough time in legal to know this, there are many versions of a document before it gets fully negotiated. A lot of times, even the fully negotiated, final document is not what you want in your model either. There are some other subsets of that. Figuring out what you're ultimately going to put into the model is what will drive the results. I've worked with lawyers for a long time. I've greatly valued my time in legal. They are a skeptical group of people for good reasons.

That is true.

My view on all this stuff is that when it starts spinning things back, lawyers are typically going to look at it and want to scrutinize it. They clearly should for a long time. These things will move toward more human-like behavior, but it's not human. It's not true intelligence yet. It is the assistant that will help us in a lot of ways, but there are a lot of downsides to it. It's not perfect. None of these is a panacea. There's no way that 100% of what you consume is accurate, no matter what you're asking. They will have tendencies to make stuff up and regurgitate stuff. If they were trained with bad data, they will give bad, erroneous, less-than-complete answers.

Lawyers Implementing AI: There's no way that 100% of what you consume from AI is accurate, no matter what you're asking.

I was thinking about what the takeaways are from that Mata case. You've covered a lot of that already. I don't want to recite the facts of the case. If anybody reading this is not familiar with it, look it up. Some New York lawyers decided to rely on ChatGPT to draft a motion or a response. It did hallucinate and created false citations. The lawyers wound up getting sanctioned as a result.

The real takeaway from that is 1) Don't get in over your head from the start by relying on something you don't understand. I know one of the responses that those guys gave was, "I thought this was a research service that I could plug in and rely on what it gave me." The court didn't find that persuasive. 2) You've got to have human eyes on it. Human eyes with the experience to know what could be a hallucination versus what's not, since you can't always tell. Sometimes, that stuff looks really good. That, to me, is what helps to answer the question of whether this is going to result in taking away all our jobs.

Until that software or those programs get to the point where they're hallucination-free and you don't have to worry about poisoning anymore, the answer would seem to be no. It's probably not going to happen over the course of my career. With all that said, though, how would you advise lawyers who are looking to dip their toe in the water or create a general approach to adopting AI? We're talking about a wide variety of practices and practice areas. We've talked about solo and large firms. What's the starting point?

It really is engaging this assistant mentality, and it may be doing things that are away from the actual legal work initially. Some of the things that I was talking about might be a little bit intrusive from the Office 365 standpoint and what's going on with some of the Copilot elements there. It is saying, "I see a task in your email," or, "There's something you need to deal with here." It is completing the sentences. That will help get you started. There will be a number of tools on the research side.

On the filing side, they will enhance some of the document management systems that we're using, suggesting where to file things and how to organize them. Those will be some really good applications. On the research side, it has incredible potential to help, or at least fast-track things. I remember the days when we first started introducing electronic research pieces. There were the lawyers that wanted to have a book in their hands. They wanted to feel that paper, sit down in the library, and do that work. It took a while to transition to where the libraries are nonexistent. We don't allocate any space to them anymore. Everything is electronic.

The AI elements of research will be incredibly helpful. Once people are comfortable with it, they will continue to check it: I'll use it, but then I'll probably look a little bit further to make sure that we're not missing anything else. It will be the same thing on the e-discovery piece. You are giving up some level of control. It will take some time for you to be comfortable with that.

Those are the solutions that lawyers will look to. It will be through some of the document automation pieces. We've already introduced some document assembly tools and things like that, and AI will play into this by making suggestions. You've got clause banks. This is going a little bit to the transactional side of the house. In building those clause banks and whatnot, AI will help streamline some of that.

Over time, people will get more comfortable and more confident in what it's helping to do. It will help streamline those functions of surfacing information. Serving up relevant information in a very directed, focused place will be the initial toolset. As this evolves, these systems get trained better, and we have more control over the training and all that, people will get more comfortable. It's a few years down the road. I don't see it as a replacement. I see it as further assisting you in getting your work done.

The question is, does it take away what legal assistants are doing or what very junior associates are doing? That is a question that's still yet to be answered. You have to look at the associates. You're getting them in and training them to ultimately become more senior folks or partners in the firm or whatever else. There will still be a path for them. A lot of these tools could ultimately impact what they're doing, but it's not there now.

Are you advising the law firms that you work with on the development of policies regarding the use of AI?

Yes. We are talking about that some. It gets into the regulatory piece, which I'll touch on briefly. It is an evolutionary tool. You're seeing these big blanket, high-level statements made. Biden has come out. The Commerce Department is going to have an AI function. What I would argue is they really don't know what they're going to do yet. You've got a lot of people who are trying to regulate something that they don't understand at all. You've got these international groups coming together to make statements like, "We've got to do this stuff to protect the good of people overall." I'm not sure exactly what that means yet. Let's get into the details of it.

That is one of the things: firms are going to have to stay in touch with what's going on on the regulatory side of it, and then have this philosophy internally about how they're going to use it. It's one thing to block OpenAI and say, "People can't go to the site and do it." With the offerings that are coming in the tools, firms need to be thinking about, "We subscribe to Lexis already. They're going to have an AI offering. How are we going to consider it? How are we going to integrate it?" These are the conversations firms need to be having. They need to be having conversations about what they're not going to do with it.

They need to be having conversations about developing strategies for how they're going to deal with the outside counsel guidelines that have AI elements in them. Some of it's simply going to be pushing back and saying, "We need further clarification of what you're talking about." It's like what we did on the cyber side where they made some very blanket statements. They were difficult to interpret or apply internally. We were like, "What are we going to do?" Firms need to be having those conversations. They need to be thinking about it. They need to have an active dialogue about it.

We're trying to do as much education as we can. There is some involvement. When they do decide to bring a tool in, what do you need to do? There is thinking about how it's coming in, how you're going to integrate it, how you're going to manage the project, how you're going to train people, and how you are going to configure these tools. You've got a lot of different options. What are you going to do? That's every single tool. The overarching philosophy and policies will slow down the acquisition and adoption of a lot of these tools a little bit, and that is a good thing if firms understand exactly what they're going to do.

The last thing you want to happen inside the firm is for a BD group to say, "We are bringing in an AI tool," and for no one to understand what they're doing or what they're bringing in. There is going to have to be a dissection, starting with the leadership of the firm. Then it goes to the general counsel, somebody who fills that role of protecting the firm across these areas. There will need to be IT involvement. You have to dissect all that technology. Does the technology require Cloud technology or not? Is it something that's going to be on-prem?

There are a great number of questions that are going to come up. Firms are going to need to be prepared to deal with this stuff. The more work they do on the front end of it, the easier it will be to address the solutions when they start coming. Trust me. I can see the requests already. Practice groups read about a new tool and say, "That's exactly what I need."

Lawyers Implementing AI: The more work they do on the front end of it, the easier it will be to address the solutions when they start coming.

I used to argue about this all the time. One of our attorneys would be on an airplane looking at the SkyMall catalog, see some great piece of technology, and we would get the phone call: "We really need this." It would be nothing that we needed at all, or we already had three solutions in that space that they had never bothered to use. That's where training and education come in. That is going to be a huge piece of this.

Once the firms decide what they're going to do, it is going to be understanding the interface and how it's trained and developed. There's an argument for firms to go and start engaging in some of the prompt engineering and things like that for people to understand how to engage and use some of these ChatGPT elements. Even if they're not doing the prompts themselves, they will still understand what some of the prompt automation tools are doing. They'll be able to dissect it and work better with us on the implementation and assimilation of a lot of these technologies.

It's not going away. It's only going to escalate. We're going to see more of it come. It is one of those things that is a little bit overwhelming. I feel it a little bit on this call; there are so many directions to take and things to talk about. There are so many things to consider. It is a very challenging concept to bring forward in a straightforward, easy, simple way.

You're saying this and in the back of my head, I'm hearing all the potential ethical issues both in terms of technological competence and then thinking about the different ethical areas that AI use impacts across all the various practice groups. That makes your head spin.

The privacy and ethical issues are huge. It goes back to that philosophical discussion. It's the reason firms need to be having these conversations before they ever start bringing these tools in. There will be conversations around the cost and all that stuff. I get that, but where the ethical and the privacy pieces come into play is understanding the tools, what they're doing, and where your information is being stored. When you're interfacing with it, what is it doing?

With that, a lot of firms have come out and said, "We built our own ChatGPT." Trust me. It is sitting in Azure and is locked down. It is in that Azure construct even though it's using some of the OpenAI pieces. It's not escaping that container. Those are some of the technical considerations that firms will have to have when they're talking about any of these solutions. Where does it sit? What does it really do? It is a huge challenge. I wish it was easier. I told the group when I met with them, "I wish I could tell you this is the solution and this is what you do, but it is not that easy."

That's what lawyers want. They're like, "What's the answer, Tim?"

That was really my first slide in that presentation. I was like, "If you're looking for me to tell you that this is a definitive answer in AI, you're not going to get it from me. Not yet. Maybe 2 or 3 years from now, I can at least narrow it down more, but that is very hard to do. It's everywhere and everybody is trying it to some degree."

The perfect, lawyerly answer is, "It depends."

Tim's clearly learned a few things from hanging around with lawyers.

Are there resources that you would recommend to lawyers, whatever their practice setting, who are interested in learning more? You can get online and read all kinds of stuff that's either gloom and doom or portrays it as a panacea. It seems like there's not a lot in between. Are there some practical guides that you could refer our audience to that you might consult?

On the legal side, I tend to read a lot that comes out in some of the AmLaw tech publications because they tend to be very focused on what lawyers are looking for, the solutions, the efficiency, and the responsibilities. A lot of stuff that I talk about will be reiterated and reinforced there. They'll also talk about the tools that are being released. I'll look at those. I immediately gravitate toward what the technology does and how it really works.

The problem is there's too much out there to try to consume. Lawyers are incredibly busy people. You've got a lot to keep up with. It is hard. I started using some technology resources like TLDR, which is the acronym for Too Long, Didn't Read. There's an AI variant of that which will have a lot of snippets that I will go to and read.

I'll be perfectly honest. I build some Google Alerts at times on very specific and narrow regulatory AI pieces and stuff where I want to see things that are being surfaced or talked about that I may miss. I use resources like WIRED Magazine. There are not a lot of good central resources. I get information from a couple of different places. From a legal standpoint, firms can look at what's coming out in any of the legal trade publications, whether it be American Lawyer or any subsets of that, any of the Law360 stuff, or anything that touches on AI.

I was reading an article that came out that really focused on some of the use cases and efficiencies, which is what will be most relevant. If there are people who want to get into the technical side, you can go out there. There is tons of Reddit stuff you can follow. You can get into the GitHub stuff on what is being built on the LLMs and everything else. It is mind-numbingly tedious at times and is way beyond what most people will be interested in doing.

The last point I'll make on that is that anything that talks about the efficiency use cases is important, as well as the regulatory pieces. Those are what I would focus on the most. The regulatory piece in particular is evolving very quickly since it has an international focus. In fact, there was a meeting in London with a number of countries represented. Everybody was saying, "We need universal control over this."

There is the concern that AI is going to be this runaway stuff. There have been groups that have gotten together that want to call a six-month ceasefire, a moratorium on the development of these new frontier models. I don't see this happening. I don't think any of these countries can get good control of it yet, but it's going to be interesting to see what they come out with and how they try to regulate and control it. It could ultimately do more harm than good, but we will see. It's not there yet. They're not doing it yet.

Tim, we really appreciate you being on. This is fascinating stuff. It makes my head spin and gives me a little bit of anxiety thinking about all that's out there. As we close out our tradition here, it is to let our guests leave us with a tip or a war story. You've given us plenty of tips. If you have another one, we'd love to hear it. If you have a war story you'd like to share, we'd love to hear that, too.

The war story piece I probably already covered, talking about the Blackberries a little bit before.

That was a good story.

There are a couple of pieces of technology that have left us bruised and battered over the years. That's one. I've got a few around cybersecurity, too. I will tell one. This could easily be AI: we get a bunch of lawyers on board, and they tell us what they want to do. I will go back to the security side. We were talking about moving to a containerized model on our mobile devices. We had a technology group and a GC group that were very focused on that.

As soon as we did it, they really started understanding what containerized technology meant. People's CarPlay didn't work the way that they were used to it working. It didn't integrate as much because we were blocking off the sharing of our technology pieces with what was generally iOS at the time. We had to undo some of that, open it up, and let a few folks use some of the older technology so their Apple CarPlay would work. It's amazing at times. That's one of those things we have to adapt to. That is ultimately going to be what happens with AI. There will be pieces that will be really good. There is going to be something out there that people aren't going to like, and we'll get hammered with it.

We covered a lot. I started getting focused on AI around March or April 2023. I had paid attention to it years before but for a different reason. With ChatGPT and so much coming out, I started to get into it. I wanted to understand it at a fundamental level. It was the reason I started breaking apart a lot of the neural nets and things like that. I used to do some coding in my background. I wanted to understand how the code was built. What I started reading, and what I've read of late, is that even the people who do the coding don't really understand why a lot of these neural nets react the way that they do. It made me feel better about what they're doing and the way they're trying to build it.

It is one of those things that is going to continue to evolve very quickly. It is something that you can't ignore and you have to pay attention to. It will have an impact. It is an immersive technology, whether on the legal side or elsewhere. You will see it a lot in your personal lives on your iOS devices, Android devices, and the way you interact with your car. It will impact everything that you do. Understand it in your personal life, and then we're going to have to understand how it impacts and applies to our professional lives. It will continue to evolve.

What lawyers need to understand about it is that they can't ignore it. They're going to have to pay attention to it, embrace it, and understand it. It is one of those things where you can't say, "AI is not going to impact me in my practice, so I'm not going to pay any attention." That is the wrong attitude to have. They need to pay attention to it. You need to be asking questions. If there are things happening that you don't understand, ask somebody. Pick up the phone. Talk to somebody on the admin side, whether it be on the tech side or wherever else. Talk to other lawyers. Be curious about it and engage with it. It is going to evolve.

I agree. It will evolve way beyond my professional career, but there's a piece of it I have to be engaged with and deal with during that. At some point, I'll leave it for some younger folks to deal with. It has the potential to be a really good and powerful tool. It is one of those things that is here to stay. It will impact your life every day. You better find ways to accept it, whether you like it or not.

Thank you, Tim. This has been great. We really appreciate it.

That's a great parting thought. We appreciate the time. We'll look forward to learning more as we go.

I appreciate it. It was very good to talk to you both. I look forward to being involved and maybe talking more soon in the future.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.